Trajectory prediction has been a long-standing problem in intelligent systems such as autonomous driving and robot navigation. Recent state-of-the-art models trained on large-scale benchmarks have been rapidly pushing the limits of performance, mainly by improving prediction accuracy. However, these models put less emphasis on efficiency, which is critical for real-time applications. This paper proposes an attention-based graph model, named GATraj, with a much higher prediction speed. The spatial-temporal dynamics of agents, e.g., pedestrians or vehicles, are modeled by attention mechanisms, and the interactions among agents are modeled by a graph convolutional network. We also implement a Laplacian mixture decoder to mitigate mode collapse and generate diverse multimodal predictions for each agent. Tested on multiple open datasets, our model achieves performance on par with state-of-the-art models at a much higher prediction speed.
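For illustration, a minimal sketch of a Laplace-mixture trajectory loss is given below (not the authors' released code; laplace_mixture_nll and its shapes are hypothetical). The log-sum-exp over modes keeps gradient flowing to every mode, which is what mitigates mode collapse:

import torch

def laplace_mixture_nll(mu, b, logit_pi, target):
    """Negative log-likelihood of a K-mode Laplace mixture (toy sketch).

    mu:       (K, T, 2) predicted trajectory modes
    b:        (K, T, 2) positive Laplace scale parameters
    logit_pi: (K,)      unnormalized mode weights
    target:   (T, 2)    ground-truth trajectory
    """
    # Per-mode Laplace log-density, summed over time steps and x/y.
    log_prob = (-torch.abs(target - mu) / b - torch.log(2 * b)).sum(dim=(1, 2))
    log_pi = torch.log_softmax(logit_pi, dim=0)
    # Log-sum-exp over modes keeps low-probability modes in play.
    return -torch.logsumexp(log_pi + log_prob, dim=0)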
Recently, image-wise implicit neural representations of videos, NeRV, have gained popularity for their promising results and rapid speed compared with conventional pixel-wise implicit representations. However, redundant parameters within the network structure lead to a large model size when scaling up for desirable performance. The key reason for this phenomenon is the coupled formulation of NeRV, which outputs the spatial and temporal information of video frames directly from the frame index input. In this paper, we propose E-NeRV, which dramatically expedites NeRV by decomposing the image-wise implicit neural representation into separate spatial and temporal contexts. Under the guidance of this new formulation, our model greatly reduces the redundant model parameters while retaining the representation ability. We experimentally find that our method can improve performance with fewer parameters, reaching convergence more than $8\times$ faster. Code is available at https://github.com/kyleleey/e-nerv.
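A toy sketch of the decoupling idea follows, assuming a learned spatial grid shared across frames and a small temporal MLP on the frame index; layer sizes and the DecoupledNeRV name are illustrative, not the paper's architecture:

import torch
import torch.nn as nn

class DecoupledNeRV(nn.Module):
    """Toy illustration of separate spatial and temporal contexts."""
    def __init__(self, h=32, w=32, c=16):
        super().__init__()
        # Spatial context: a learned feature grid shared by all frames.
        self.spatial = nn.Parameter(torch.randn(1, c, h, w))
        # Temporal context: a small MLP on the normalized frame index.
        self.temporal = nn.Sequential(nn.Linear(1, 64), nn.GELU(), nn.Linear(64, c))
        self.head = nn.Conv2d(c, 3, kernel_size=3, padding=1)

    def forward(self, t):                         # t: (B, 1) in [0, 1]
        tc = self.temporal(t)[:, :, None, None]   # (B, c, 1, 1)
        # Modulate the shared spatial grid with the temporal code.
        return torch.sigmoid(self.head(self.spatial * tc))

frame = DecoupledNeRV()(torch.tensor([[0.5]]))    # (1, 3, 32, 32)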
Temporal action detection (TAD) with end-to-end training often suffers from a huge demand for computing resources due to long video durations. In this work, we propose an efficient temporal action detector (ETAD) that can train directly from video frames with extremely low GPU memory consumption. Our main idea is to minimize and balance the heavy computation among features and gradients in each training iteration. We propose to sequentially forward the snippet frames through the video encoder, and backward only a small, necessary portion of gradients to update the encoder. To further alleviate the computational redundancy in training, we propose to dynamically sample only a small subset of proposals during training. Moreover, various sampling strategies and ratios are studied for both the encoder and detector. ETAD achieves state-of-the-art performance on TAD benchmarks with remarkable efficiency. On ActivityNet-1.3, ETAD reaches 38.25% average mAP with only 1.3 GB memory consumption per video after 18 hours of end-to-end training. Our code will be publicly released.
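A minimal sketch of the sequential-forward, sampled-backward idea is shown below (hypothetical encode_snippets helper, not the released code); only a sampled fraction of snippets keeps the autograd graph, so encoder activations for the rest are never stored:

import torch

def encode_snippets(encoder, snippets, grad_ratio=0.3):
    """Memory-saving snippet encoding, in the spirit of ETAD (illustrative).

    Forward snippets one at a time; a randomly sampled subset keeps the
    autograd graph, so the encoder backward only touches that fraction.
    """
    n = len(snippets)
    keep = set(torch.randperm(n)[: max(1, int(grad_ratio * n))].tolist())
    feats = []
    for i, s in enumerate(snippets):
        if i in keep:
            feats.append(encoder(s))          # gradients flow for this snippet
        else:
            with torch.no_grad():
                feats.append(encoder(s))      # activations are not stored
    return torch.stack(feats)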
Referring image segmentation is a typical multi-modal task, which aims at generating a binary mask for the referent described in a given language expression. Prior arts adopt a bimodal solution, taking images and language as two modalities within an encoder-fusion-decoder pipeline. However, this pipeline is sub-optimal for the target task for two reasons. First, it only fuses the high-level features produced separately by the uni-modal encoders, which hinders sufficient cross-modal learning. Second, the uni-modal encoders are pre-trained independently, which brings inconsistency between the pre-trained uni-modal tasks and the target multi-modal task. Besides, this pipeline often ignores or makes little use of intuitively beneficial instance-level features. To relieve these problems, we propose MaIL, a more concise encoder-decoder pipeline with a Mask-Image-Language trimodal encoder. Specifically, MaIL unifies the uni-modal feature extractors and their fusion model into a deep modality-interaction encoder, facilitating sufficient feature interaction across different modalities. Meanwhile, MaIL directly avoids the second limitation since uni-modal encoders are no longer needed. Moreover, for the first time, we propose to introduce instance masks as an additional modality, which explicitly intensifies instance-level features and promotes finer segmentation results. MaIL sets a new state of the art on all frequently used referring image segmentation datasets, including RefCOCO, RefCOCO+, and G-Ref, with significant gains of 3%-10% over previous best methods. Code will be released soon.
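A toy sketch of the trimodal-encoder idea follows, assuming pre-tokenized image, language, and mask inputs; the TrimodalEncoder class and its sizes are placeholders rather than MaIL's actual design:

import torch
import torch.nn as nn

class TrimodalEncoder(nn.Module):
    """Joint attention over mask/image/language tokens (illustrative)."""
    def __init__(self, d=256):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, img_tok, lang_tok, mask_tok):
        # One sequence, so every layer performs cross-modal interaction,
        # instead of fusing only the final uni-modal features.
        tokens = torch.cat([img_tok, lang_tok, mask_tok], dim=1)  # (B, N, d)
        return self.encoder(tokens)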
The data consistency for the physical forward model is crucial in inverse problems, especially in MR imaging reconstruction. The standard way is to unroll an iterative algorithm into a neural network with the forward model embedded. The forward model always changes in clinical practice, so the learning component's entanglement with the forward model makes the reconstruction hard to generalize. The proposed method is more generalizable across different MR acquisition settings because it separates the forward model from the deep learning component. A deep learning-based proximal gradient descent is proposed to create a learned regularization term that is independent of the forward model. We applied the one-time trained regularization term to different MR acquisition settings to validate the proposed method and compared the reconstruction with the commonly used $\ell_1$ regularization. We showed a ~3 dB improvement in peak signal-to-noise ratio compared with conventional $\ell_1$-regularized reconstruction. We demonstrated the flexibility of the proposed method in choosing different undersampling patterns, and evaluated the effect of parameter tuning for the deep learning regularization.
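A minimal sketch of the unrolled loop is given below (all names are illustrative); the forward model A/At stays outside the learned component, so the one-time trained prox_net can be reused across acquisition settings:

import torch

def proximal_gradient_recon(x0, A, At, y, prox_net, step=1.0, n_iter=10):
    """Learned proximal gradient loop (sketch, not the paper's code).

    A / At implement the acquisition-specific forward model and its adjoint;
    prox_net is a learned regularizer trained once, independent of A.
    """
    x = x0
    for _ in range(n_iter):
        grad = At(A(x) - y)            # data-consistency gradient of ||Ax - y||^2
        x = prox_net(x - step * grad)  # learned regularization (proximal) step
    return x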
Few-shot (FS) and zero-shot (ZS) learning are two different approaches for scaling temporal action detection (TAD) to new classes. The former adapts a pretrained vision model to a new task represented by as few as a single video per class, whilst the latter requires no training examples by exploiting a semantic description of the new class. In this work, we introduce a new multi-modality few-shot (MMFS) TAD problem, which can be considered a marriage of FS-TAD and ZS-TAD by leveraging few-shot support videos and new class names jointly. To tackle this problem, we further introduce a novel MUlti-modality PromPt mETa-learning (MUPPET) method. This is enabled by efficiently bridging pretrained vision and language models whilst maximally reusing already learned capacity. Concretely, we construct multi-modal prompts by mapping support videos into the textual token space of a vision-language model using a meta-learned, adapter-equipped visual semantics tokenizer. To tackle large intra-class variation, we further design a query feature regulation scheme. Extensive experiments on ActivityNetv1.3 and THUMOS14 demonstrate that our MUPPET outperforms state-of-the-art alternative methods, often by a large margin. We also show that MUPPET can be easily extended to tackle the few-shot object detection problem, again achieving state-of-the-art performance on the MS-COCO dataset. The code will be available at https://github.com/sauradip/MUPPET
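A toy sketch of the prompt-construction step follows, assuming a simple linear adapter; the real tokenizer is meta-learned and the sizes here are placeholders:

import torch
import torch.nn as nn

class VisualSemanticsTokenizer(nn.Module):
    """Maps support-video features into the text token space (illustrative)."""
    def __init__(self, vid_dim=768, txt_dim=512, n_prompt=4):
        super().__init__()
        self.adapter = nn.Linear(vid_dim, n_prompt * txt_dim)
        self.n_prompt, self.txt_dim = n_prompt, txt_dim

    def forward(self, vid_feat):                   # (B, vid_dim)
        p = self.adapter(vid_feat)                 # (B, n_prompt * txt_dim)
        return p.view(-1, self.n_prompt, self.txt_dim)

# The resulting prompt tokens would be concatenated with class-name token
# embeddings before a (frozen) text encoder, joining few-shot and zero-shot
# cues in a single multi-modal prompt.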
Data valuation, especially quantifying data value in algorithmic prediction and decision-making, is a fundamental problem in data trading scenarios. The most widely used method is to define the data Shapley and approximate it by means of the permutation sampling algorithm. To compensate for the large estimation variance of permutation sampling, which hinders the development of the data marketplace, we propose a more robust data valuation method using stratified sampling, named variance-reduced data Shapley (VRDS for short). We theoretically show how to stratify, how many samples to take at each stratum, and give the sample complexity analysis of VRDS. Finally, the effectiveness of VRDS is illustrated on different types of datasets and data removal applications.
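For intuition, a minimal stratified estimator is sketched below (hypothetical utility callback; the paper's variance-optimal allocation of samples per stratum is replaced by a fixed count):

import random

def stratified_data_shapley(i, data, utility, samples_per_stratum=10):
    """Stratified Shapley estimate for data point i (toy sketch).

    Each stratum fixes the coalition size k; the marginal contribution of
    point i is averaged over random coalitions of that size, then the
    strata are averaged, matching the Shapley value's definition.
    """
    others = [j for j in range(len(data)) if j != i]
    n = len(data)
    total = 0.0
    for k in range(n):                       # stratum = coalition size k
        acc = 0.0
        for _ in range(samples_per_stratum):
            coalition = random.sample(others, k)
            acc += utility(coalition + [i]) - utility(coalition)
        total += acc / samples_per_stratum
    return total / n                         # average over all strata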
In the hyperspectral image (HSI) classification task, textual information, including abundant prior knowledge about land cover categories, has been ignored, and it is necessary to explore the effectiveness of the linguistic modality in assisting HSI classification. In addition, large-scale pre-trained image-text foundation models have demonstrated excellent performance in a variety of downstream applications, including zero-shot transfer. However, most domain generalization methods have never addressed mining linguistic modal knowledge to improve the generalization performance of the model. To compensate for the deficiencies mentioned above, a Language-aware Domain Generalization Network (LDGnet) is proposed to learn cross-domain invariant representations from cross-domain shared prior knowledge. The proposed method trains only on the source domain (SD) and then transfers the model to the target domain (TD). A dual-stream architecture with an image encoder and a text encoder is used to extract visual and linguistic features, in which coarse-grained and fine-grained text representations are designed to extract two levels of linguistic features. Furthermore, the linguistic features serve as a cross-domain shared semantic space, and visual-linguistic alignment is accomplished by supervised contrastive learning in that semantic space. Extensive experiments on three datasets demonstrate the superiority of the proposed method over state-of-the-art techniques.
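A toy sketch of the visual-linguistic alignment objective follows (illustrative, not LDGnet's exact loss), pulling each visual feature toward its class text embedding in the shared semantic space:

import torch
import torch.nn.functional as F

def vl_alignment_loss(vis, txt, labels, tau=0.07):
    """Supervised contrastive-style alignment loss (toy sketch).

    vis:    (B, d) image features
    txt:    (C, d) per-class text features
    labels: (B,)   class index of each image
    """
    vis = F.normalize(vis, dim=-1)
    txt = F.normalize(txt, dim=-1)
    logits = vis @ txt.t() / tau            # (B, C) cosine similarities
    # Cross-entropy over classes pulls each visual feature toward the
    # matching class text prototype and pushes it from the others.
    return F.cross_entropy(logits, labels)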
The idea of cooperative perception is to benefit from perception data shared among multiple vehicles and to overcome the limitations of the on-board sensors of a single vehicle. However, fusing multi-vehicle information remains challenging due to localization inaccuracy, limited communication bandwidth, and ambiguous fusion. Past practices simplify the problem by deploying precise GNSS positioning systems, manually specifying the number of connected vehicles, and fixing the fusion strategy. This paper proposes a map-based cooperative perception framework, named map container, to improve the accuracy and robustness of cooperative perception and ultimately overcome this problem. The concept of a "map container" means that the map serves as the platform that transforms all information into the map coordinate space and incorporates different information sources into a distributed fusion architecture. In the proposed map container, the matching relationships between GNSS signals, sensor features, and map features are exploited to optimize the estimation of environment states. Evaluation results on a simulation dataset and a real-vehicle platform validate the effectiveness of the proposed method.
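A minimal sketch of the map-coordinate transform at the heart of the container idea is shown below; in the framework the pose would be estimated from GNSS signals and map-feature matching, whereas here it is a plain input:

import numpy as np

def to_map_frame(points_vehicle, pose_xy, yaw):
    """Transform vehicle-frame detections into the shared map frame (toy).

    points_vehicle: (N, 2) detections in the ego vehicle's frame
    pose_xy, yaw:   the vehicle's planar pose in map coordinates
    """
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return points_vehicle @ R.T + pose_xy   # (N, 2) in map coordinates

# Once every vehicle contributes observations in one common frame, the
# distributed fusion step can associate and merge them directly.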
Environment perception with multi-modal fusion of radar and camera is crucial for autonomous driving to improve accuracy, completeness, and robustness. This paper focuses on how to utilize millimeter-wave (MMW) radar and camera sensor fusion for 3D object detection. A novel method is proposed that realizes feature-level fusion under the bird's-eye view (BEV) for a better feature representation. First, radar features are augmented by temporal accumulation and sent to a temporal-spatial encoder for radar feature extraction. Meanwhile, multi-scale 2D image features adapted to various spatial scales are obtained by an image backbone and neck model. Then, the image features are transformed to BEV with a designed view transformer. In addition, this work fuses the multi-modal features with a two-stage fusion model, comprising point fusion and ROI fusion. Finally, a detection head regresses object classes and 3D locations. Experimental results demonstrate that the proposed method achieves state-of-the-art performance under the most crucial detection metrics, mean average precision (mAP) and nuScenes detection score (NDS).
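A toy sketch of the first (point-level) fusion stage follows, assuming radar and camera features already projected onto one BEV grid; channel sizes and the BEVPointFusion name are placeholders:

import torch
import torch.nn as nn

class BEVPointFusion(nn.Module):
    """First-stage per-cell fusion of radar and camera BEV grids (toy).

    The paper's two-stage design adds ROI-level fusion after proposal
    generation, which is omitted in this sketch.
    """
    def __init__(self, c_rad=64, c_img=64, c_out=128):
        super().__init__()
        self.fuse = nn.Conv2d(c_rad + c_img, c_out, kernel_size=3, padding=1)

    def forward(self, bev_radar, bev_image):  # both (B, C, H, W), same grid
        # Aligned BEV grids allow simple per-cell (point-wise) fusion.
        return self.fuse(torch.cat([bev_radar, bev_image], dim=1))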